Random Prism: a noise-tolerant alternative to Random Forests
Authors

Abstract
Ensemble learning can be used to increase the overall classification accuracy of a classifier by generating multiple base classifiers and combining their classification results. A frequently used family of base classifiers for ensemble learning are decision trees. However, alternative approaches can potentially be used, such as the Prism family of algorithms which also induces classification ru...
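The abstract describes the general ensemble pattern of inducing several base classifiers and combining their outputs by voting. As a rough illustration only (not the paper's Random Prism algorithm), the sketch below trains a handful of decision-tree base classifiers on bootstrap samples and combines them by majority vote; it assumes scikit-learn and NumPy are available, and the dataset and parameter values are arbitrary placeholders.

# Illustrative only: a generic bagged majority-vote ensemble, not the
# paper's Random Prism algorithm. Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Train several base classifiers, each on its own bootstrap sample (bagging).
base_classifiers = []
for seed in range(11):  # an odd number of voters avoids most ties
    X_boot, y_boot = resample(X, y, random_state=seed)
    base_classifiers.append(DecisionTreeClassifier(random_state=seed).fit(X_boot, y_boot))

# Combine the base classifiers' predictions by majority vote.
votes = np.stack([clf.predict(X) for clf in base_classifiers])  # shape: (n_classifiers, n_samples)
majority = np.apply_along_axis(lambda column: np.bincount(column).argmax(), 0, votes)
print("ensemble training accuracy:", (majority == y).mean())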
Similar Articles

Random Prism: An Alternative to Random Forests
Ensemble learning techniques generate multiple classifiers, so called base classifiers, whose combined classification results are used in order to increase the overall classification accuracy. In most ensemble classifiers the base classifiers are based on the Top Down Induction of Decision Trees (TDIDT) approach. However, an alternative approach for the induction of rule based classifiers is th...
Random Forests -- Random Features
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The error of a forest of tree classifiers depends on the strength of the individual tre...
Random super-prism wavelength meter.
The speckle pattern arising from a thin random, disordered scatterer may be used to detect the transversal mode of an incident beam. On the other hand, speckle patterns originating from meter-long multimode fibers can be used to detect different wavelengths. Combining these approaches, we develop a method that uses a thin random scattering medium to measure the wavelength of a near-infrared las...
Random Multiclass Classification: Generalizing Random Forests to Random MNL and Random NB
Random Forests (RF) is a successful classifier exhibiting performance comparable to Adaboost, but is more robust. The exploitation of two sources of randomness, random inputs (bagging) and random features, make RF accurate classifiers in several domains. We hypothesize that methods other than classification or regression trees could also benefit from injecting randomness. This paper generalizes...
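The Random Forests entry above highlights the two sources of randomness exploited by the method: bootstrap sampling of the training data (bagging) and random feature selection at each split. As a minimal sketch of how these two mechanisms are typically exposed in practice, assuming scikit-learn (the bootstrap and max_features parameters of RandomForestClassifier; all values below are arbitrary examples):

# Minimal sketch of the two sources of randomness described above:
# bagging (a bootstrap sample per tree) and a random feature subset per split.
# Assumes scikit-learn; dataset and parameter values are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,     # number of tree predictors in the forest
    bootstrap=True,       # each tree is trained on a bootstrap sample of the data
    max_features="sqrt",  # random subset of features considered at each split
    random_state=0,
)
print("5-fold cross-validated accuracy:", cross_val_score(forest, X, y, cv=5).mean())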
Journal

Journal title: Expert Systems
Year: 2013
ISSN: 0266-4720
DOI: 10.1111/exsy.12032